
Response acquisition under targeted percentile schedules: a continuing quandary for molar models of operant behavior.


Abstract

The number of responses rats made in a "run" of consecutive left-lever presses, prior to a trial-ending right-lever press, was differentiated using a targeted percentile procedure. Under the nondifferential baseline, reinforcement was provided with a probability of .33 at the end of a trial, irrespective of the run on that trial. Most of the 30 subjects made short runs under these conditions, with the mean for the group around three. A targeted percentile schedule was next used to differentiate run length around the target value of 12. The current run was reinforced if it was nearer the target than 67% of those runs in the last 24 trials that were on the same side of the target as the current run. Programming reinforcement in this way held overall reinforcement probability per trial constant at .33 while providing reinforcement differentially with respect to runs more closely approximating the target of 12. The mean run for the group under this procedure increased to approximately 10. Runs approaching the target length were acquired even though differentiated responding produced the same probability of reinforcement per trial, decreased the probability of reinforcement per response, did not increase overall reinforcement rate, and generally substantially reduced it (i.e., in only a few instances did response rate increase sufficiently to compensate for the increase in the number of responses per trial). Models of behavior predicated solely on molar reinforcement contingencies all predict that runs should remain short throughout this experiment, because such runs promote both the most frequent reinforcement and the greatest reinforcement per press. To the contrary, 29 of 30 subjects emitted runs in the vicinity of the target, driving down reinforcement rate while greatly increasing the number of presses per pellet. 
These results illustrate the powerful effects of local reinforcement contingencies in changing behavior, and in doing so underscore a need for more dynamic quantitative formulations of operant behavior to supplement or supplant the currently prevalent static ones.
